Superlinear Convergence of a Modified Newton's Method for Convex Optimization Problems With Constraints

Authors

Abstract

We consider the constrained optimization problem defined by
$$f(x^*) = \min_{x \in X} f(x) \eqno(1)$$
where the function $f : \mathbb{R}^{n} \to \mathbb{R}$ is convex on a closed, bounded, convex set $X \subset \mathbb{R}^{n}$. To solve problem (1), most methods transform it into an unconstrained problem, either by introducing Lagrange multipliers or by a projection method. The purpose of this paper is to give a new method for such problems, based on the definition of a descent direction and a step size that keep the iterates in the domain X. A convergence theorem is proven. The paper ends with numerical examples.
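The abstract does not state the paper's actual update rule, so the following is only a rough sketch of the general idea it describes: a Newton-type iteration whose descent direction and step size keep every iterate inside the feasible set X. The projection-onto-a-box, the backtracking rule, and all function names here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def project(x, lo, hi):
    """Euclidean projection onto the box X = [lo, hi]^n (assumed feasible set)."""
    return np.clip(x, lo, hi)

def feasible_newton(grad, hess, x0, lo, hi, tol=1e-10, max_iter=50):
    """Sketch of a feasibility-preserving Newton iteration (hypothetical).

    Each trial point is projected back onto X, so the iterates never leave
    the feasible set; the step is shrunk until the projected move is still
    a descent-compatible direction for the gradient.
    """
    x = project(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        # Newton direction from the (assumed positive definite) Hessian.
        d = np.linalg.solve(hess(x), -g)
        t = 1.0
        x_new = x
        while t > 1e-12:
            x_new = project(x + t * d, lo, hi)
            if g @ (x_new - x) <= 0:  # projected move does not increase f to first order
                break
            t *= 0.5
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: minimize f(x) = ||x - c||^2 over X = [0, 1]^2, with c outside X,
# so the constrained minimizer is the projection of c onto the box.
c = np.array([2.0, -0.5])
grad = lambda x: 2 * (x - c)
hess = lambda x: 2 * np.eye(2)
x_star = feasible_newton(grad, hess, np.array([0.5, 0.5]), 0.0, 1.0)
print(x_star)  # -> [1. 0.]
```

For this strongly convex quadratic the iteration reaches the constrained minimizer in a couple of steps; the paper's contribution, per the abstract, is proving superlinear convergence for its particular choice of direction and step.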


Similar Articles

Two-Level Optimization Problems with Infinite Number of Convex Lower Level Constraints

This paper proposes a new form of optimization problem: a two-level programming problem with infinitely many lower-level constraints. First, we consider some lower-level constraint qualifications (CQs) for this problem. Then, under these CQs, we derive a formula for estimating the subdifferential of its value function. Finally, we present some necessary optimality condit...


Superlinear Convergence of an Interior-Point Method Despite Dependent Constraints

We show that an interior-point method for monotone variational inequalities exhibits superlinear convergence provided that all the standard assumptions hold except for the well-known assumption that the Jacobian of the active constraints has full rank at the solution. We show that superlinear convergence occurs even when the constant-rank condition on the Jacobian assumed in an earlier work doe...


Convergence Analysis of the Gauss-newton Method for Convex Inclusion Problems and Convex Composite Optimization

Using the convex process theory we study the convergence issues of the iterative sequences generated by the Gauss-Newton method for the convex inclusion problem defined by a cone C and a Fréchet differentiable function F (the derivative is denoted by F ′). The restriction in our consideration is minimal and, even in the classical case (the initial point x0 is assumed to satisfy the following tw...


A Polynomial-Time Descent Method for Separable Convex Optimization Problems with Linear Constraints

We propose a polynomial algorithm for a separable convex optimization problem with linear constraints. We do not make any additional assumptions about the structure of the objective function except for polynomial computability. That is, the objective function can be non-differentiable. The running time of our algorithm is polynomial in the size of the input consisting of an instance of the prob...



Journal

Journal title: Journal of Mathematics Research

Year: 2021

ISSN: 1916-9795, 1916-9809

DOI: https://doi.org/10.5539/jmr.v13n2p90